Log-Gamma distribution#
The log-gamma distribution (SciPy: scipy.stats.loggamma) is a continuous distribution on the real line.
It appears naturally as the logarithm of a Gamma random variable:

\[
Y \sim \mathrm{Gamma}(c, 1) \quad\Longrightarrow\quad \log Y \sim \mathrm{LogGamma}(c).
\]
1) Title & classification#
| Item | Value |
|---|---|
| Name | Log-Gamma (`scipy.stats.loggamma`) |
| Type | Continuous |
| Support | \(x \in \mathbb{R}\) |
| Parameters (SciPy) | shape \(c>0\), location \(\mathrm{loc}\in\mathbb{R}\), scale \(s>0\) |
| Standard form | \(\mathrm{loc}=0,\ s=1\) |
What you’ll be able to do after this notebook#
write down the PDF/CDF and recognize the Gamma ↔ log-gamma link
compute mean/variance/skewness/kurtosis, MGF/CF, and entropy
interpret how the shape parameter \(c\) changes the distribution
derive the log-likelihood and the MLE condition for \(c\)
sample from log-gamma using a NumPy-only Gamma sampler (Marsaglia–Tsang) and a log transform
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
from scipy import optimize, stats
from scipy.special import gammainc, gammaln, loggamma as log_gamma, polygamma, psi
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=4, suppress=True)
rng = np.random.default_rng(42)
2) Intuition & motivation#
What it models#
Log-gamma models a real-valued quantity that is best understood as the log of a positive gamma-like variable. If \(X\sim\mathrm{LogGamma}(c)\), then \(\exp(X)\) is Gamma-distributed.
Because the right tail in \(x\) decays like \(\exp(-e^x)\), log-gamma has an extremely light right tail, while the left tail decays only exponentially (roughly \(\exp(c x)\) as \(x\to-\infty\)). As a result it is left-skewed (negative skewness) for every \(c>0\).
Typical real-world use cases#
Log of waiting times / lifetimes when the original scale is Gamma-like (queueing, reliability)
Log of chi-square–like quantities (chi-square is a Gamma special case)
Extreme value modeling: when \(c=1\), \(-X\) is standard Gumbel
Bayesian modeling: a Gamma prior on a positive rate/scale implies a log-gamma prior on its log
Relations to other distributions#
If \(Y\sim\mathrm{Gamma}(c,1)\), then \(\log Y\sim\mathrm{LogGamma}(c)\).
If \(c=1\) and \(X\sim\mathrm{LogGamma}(1)\), then \(-X\sim\mathrm{Gumbel}(0,1)\) (checked numerically after this list).
For large \(c\), \(X\) is approximately Normal with mean \(\psi(c)\) and variance \(\psi_1(c)\) (digamma/trigamma).
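A quick numerical check of the Gumbel link (a sketch; it compares \(-X\) against SciPy's `gumbel_r`, the right-skewed Gumbel):

# Sketch: if X ~ LogGamma(1), then -X should be standard Gumbel.
x_g = stats.loggamma(1.0).rvs(size=20_000, random_state=rng)
stats.kstest(-x_g, stats.gumbel_r().cdf)  # a large p-value is expected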
3) Formal definition#
We describe the standard distribution first (SciPy with loc=0, scale=1), then the location–scale form.
Let \(c>0\) and \(X\sim\mathrm{LogGamma}(c)\).
PDF#
For \(x\in\mathbb{R}\):

\[
f(x \mid c) = \frac{\exp\left(c x - e^{x}\right)}{\Gamma(c)}.
\]

Equivalently, the log-density is

\[
\log f(x \mid c) = c x - e^{x} - \log\Gamma(c).
\]
CDF#
Using the lower incomplete gamma function \(\gamma\) and its regularized form \(P\):

\[
F(x \mid c) = \frac{\gamma\left(c,\, e^{x}\right)}{\Gamma(c)} = P\left(c,\, e^{x}\right).
\]
SciPy exposes the regularized function as scipy.special.gammainc.
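As a quick sanity check (a sketch), evaluating the regularized incomplete gamma at \(e^x\) should reproduce SciPy's log-gamma CDF:

# Sketch: P(c, exp(x)) should match scipy.stats.loggamma.cdf(x, c).
x_chk = np.array([-3.0, -1.0, 0.0, 1.5])
print(gammainc(2.5, np.exp(x_chk)))
print(stats.loggamma.cdf(x_chk, 2.5))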
Location–scale form (SciPy parameters)#
If \(Z\sim\mathrm{LogGamma}(c)\) (standard) and

\[
X = \mathrm{loc} + s\,Z,
\]

then \(X\) has PDF

\[
f(x \mid c, \mathrm{loc}, s) = \frac{1}{s}\cdot\frac{\exp\left(c z - e^{z}\right)}{\Gamma(c)}, \qquad z = \frac{x-\mathrm{loc}}{s},
\]

and CDF

\[
F(x \mid c, \mathrm{loc}, s) = P\left(c,\, e^{z}\right).
\]
def loggamma_logpdf(x, c, loc=0.0, scale=1.0):
'''Log-PDF of the log-gamma distribution (SciPy parameterization).
Standard form (loc=0, scale=1):
log f(x|c) = c x - exp(x) - log Gamma(c)
Location-scale: x = loc + scale * z.
'''
x = np.asarray(x, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0:
raise ValueError("c must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
z = (x - loc) / scale
logpdf = np.empty_like(z, dtype=float)
# For very large z, exp(z) overflows and the density is effectively 0.
z_hi = z > 709
logpdf[z_hi] = -np.inf
logpdf[~z_hi] = c * z[~z_hi] - np.exp(z[~z_hi]) - gammaln(c) - np.log(scale)
return logpdf
def loggamma_pdf(x, c, loc=0.0, scale=1.0):
return np.exp(loggamma_logpdf(x, c=c, loc=loc, scale=scale))
def loggamma_cdf(x, c, loc=0.0, scale=1.0):
'''CDF via the regularized incomplete gamma P(c, exp(z)).
For very large z, exp(z) is huge and the CDF is numerically 1.
'''
x = np.asarray(x, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0:
raise ValueError("c must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
z = (x - loc) / scale
out = np.empty_like(z, dtype=float)
z_hi = z > 709
out[z_hi] = 1.0
out[~z_hi] = gammainc(c, np.exp(z[~z_hi]))
return out
def loggamma_stats(c, loc=0.0, scale=1.0):
'''Mean/variance/skewness/excess kurtosis/entropy for log-gamma.
Uses polygamma identities for the log of a Gamma variable.
'''
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0:
raise ValueError("c must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
digamma = psi(c)
trigamma = polygamma(1, c)
pg2 = polygamma(2, c)
pg3 = polygamma(3, c)
mean = loc + scale * digamma
var = (scale**2) * trigamma
skew = pg2 / (trigamma**1.5)
excess_kurt = pg3 / (trigamma**2)
entropy = np.log(scale) + c + gammaln(c) - c * digamma
return mean, var, skew, excess_kurt, entropy
def loggamma_mgf(t, c, loc=0.0, scale=1.0):
'''MGF M_X(t) = E[e^{tX}] for real t where it exists.
Standard: M(t) = Gamma(c+t)/Gamma(c), domain t>-c.
Location-scale: M(t) = exp(loc*t) * Gamma(c+scale*t)/Gamma(c), domain t>-c/scale.
'''
t = np.asarray(t, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0:
raise ValueError("c must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
a = c + scale * t
out = np.full_like(t, np.nan, dtype=float)
valid = a > 0
out[valid] = np.exp(loc * t[valid] + gammaln(a[valid]) - gammaln(c))
return out
def loggamma_cf(t, c, loc=0.0, scale=1.0):
'''Characteristic function phi(t) = E[e^{i t X}].
Standard: phi(t) = Gamma(c+i t)/Gamma(c).
Location-scale: phi(t) = exp(i loc t) * Gamma(c+i scale t)/Gamma(c).
'''
t = np.asarray(t, dtype=float)
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0:
raise ValueError("c must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
return np.exp(1j * loc * t + (log_gamma(c + 1j * scale * t) - gammaln(c)))
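A Monte Carlo sanity check of the MGF (a sketch; the `t` values are kept above \(-c/2\) so that \(e^{tX}\) has finite variance and the averages are stable):

# Sketch: compare M(t) = Gamma(c+t)/Gamma(c) against a Monte Carlo average of exp(t*X).
c_mgf = 3.0
t_mgf = np.array([-1.0, -0.5, 0.5, 1.0])
x_mgf = stats.loggamma(c_mgf).rvs(size=200_000, random_state=rng)
print(loggamma_mgf(t_mgf, c_mgf))
print(np.exp(t_mgf[:, None] * x_mgf[None, :]).mean(axis=1))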
4) Moments & properties#
The key identity: if \(Y\sim\mathrm{Gamma}(c,1)\) and \(X=\log Y\), then the moments of \(X\) are derivatives of \(\log\Gamma\) evaluated at \(c\), so digamma and polygamma functions show up everywhere.
For the standard log-gamma \(X\sim\mathrm{LogGamma}(c)\) (loc=0, scale=1):
| Quantity | Value |
|---|---|
| Mean | \(\mathbb{E}[X]=\psi(c)\) |
| Variance | \(\mathrm{Var}(X)=\psi_1(c)\) |
| Skewness | \(\gamma_1=\dfrac{\psi_2(c)}{\psi_1(c)^{3/2}}\) (negative) |
| Excess kurtosis | \(\gamma_2=\dfrac{\psi_3(c)}{\psi_1(c)^2}\) |
| Mode | \(\log c\) |
| MGF | \(M(t)=\dfrac{\Gamma(c+t)}{\Gamma(c)}\), for \(t>-c\) |
| CF | \(\varphi(t)=\dfrac{\Gamma(c+i t)}{\Gamma(c)}\) |
| Entropy | \(H=c+\log\Gamma(c)-c\,\psi(c)\) |
Here \(\psi\) is the digamma function, and \(\psi_k\) are polygamma functions (derivatives of digamma).
For the location–scale form \(X=\mathrm{loc}+sZ\) with \(Z\sim\mathrm{LogGamma}(c)\):
Mean: \(\mathbb{E}[X]=\mathrm{loc}+s\,\psi(c)\)
Variance: \(\mathrm{Var}(X)=s^2\,\psi_1(c)\)
Skewness/excess kurtosis: unchanged (they’re shape-only)
Entropy: \(H(X)=H(Z)+\log s\)
Cumulants are especially clean: for the standard distribution, the cumulant generating function is \(K(t)=\log M(t)=\log\Gamma(c+t)-\log\Gamma(c)\), so the \(n\)-th cumulant is \(\kappa_n=\psi_{n-1}(c)\).
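A small numerical check (a sketch): since the third cumulant of any distribution equals its third central moment, \(\kappa_3=\psi_2(c)\) should match the sample third central moment.

# Sketch: third cumulant kappa_3 = psi_2(c) vs the sample third central moment.
c_cum = 2.0
x_cum = stats.loggamma(c_cum).rvs(size=400_000, random_state=rng)
print(polygamma(2, c_cum), np.mean((x_cum - x_cum.mean()) ** 3))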
c_demo, loc_demo, scale_demo = 2.8, -0.3, 1.2
mean_th, var_th, skew_th, exkurt_th, entropy_th = loggamma_stats(c_demo, loc=loc_demo, scale=scale_demo)
dist_demo = stats.loggamma(c_demo, loc=loc_demo, scale=scale_demo)
samples_demo = dist_demo.rvs(size=250_000, random_state=rng)
mean_mc = samples_demo.mean()
var_mc = samples_demo.var()
skew_mc = stats.skew(samples_demo)
exkurt_mc = stats.kurtosis(samples_demo, fisher=True)
entropy_mc = dist_demo.entropy()  # note: SciPy's entropy() is analytic, not Monte Carlo
np.array(
[
["mean", mean_th, mean_mc],
["variance", var_th, var_mc],
["skewness", skew_th, skew_mc],
["excess kurtosis", exkurt_th, exkurt_mc],
["entropy", entropy_th, entropy_mc],
],
dtype=object,
)
array([['mean', 0.7086563866193003, 0.7093708477579757],
['variance', 0.6167983135580041, 0.6203395972311763],
['skewness', -0.645416386779895, -0.6499426099932761],
['excess kurtosis', 0.8224594703414813, 0.8131285555641372],
['entropy', 1.1454927800033334, 1.1454927800033339]], dtype=object)
5) Parameter interpretation#
Shape \(c\)#
Controls the location and spread through \(\psi(c)\) and \(\psi_1(c)\).
For small \(c\), the distribution is more spread out and strongly left-skewed.
As \(c\) increases, \(\psi(c)\approx \log c\) and \(\psi_1(c)\approx 1/c\), so the distribution becomes more concentrated and closer to Normal (a quick check follows the reference points below).
Two useful reference points:
Mode (standard): \(\log c\)
Mean (standard): \(\psi(c)\)
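A minimal check of the Normal approximation mentioned above (a sketch; `c_big = 50` is an arbitrary choice):

# Sketch: for large c, LogGamma(c) is close to Normal(psi(c), psi_1(c)).
c_big = 50.0
sd_big = np.sqrt(polygamma(1, c_big))
grid = np.linspace(psi(c_big) - 4 * sd_big, psi(c_big) + 4 * sd_big, 200)
gap = np.abs(stats.loggamma.cdf(grid, c_big) - stats.norm(psi(c_big), sd_big).cdf(grid))
print(gap.max())  # small, and it shrinks as c grows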
Location loc#
Shifts the distribution left/right: \(X=\mathrm{loc}+Z\).
Scale \(s\)#
Stretches the distribution: \(X= s Z\) (then shift by loc).
Variance scales like \(s^2\) and entropy increases by \(\log s\).
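A one-line verification of these scale rules, using `loggamma_stats` from above (a sketch):

# Sketch: variance scales by s^2 and entropy shifts by log(s).
s_chk = 1.7
_, var1, _, _, h1 = loggamma_stats(2.5)
_, var_s, _, _, h_s = loggamma_stats(2.5, scale=s_chk)
print(np.isclose(var_s, s_chk**2 * var1), np.isclose(h_s, h1 + np.log(s_chk)))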
# PDF shape changes with c (standard loc=0, scale=1)
c_values = [0.5, 1.0, 2.0, 5.0, 15.0]
x = np.linspace(-12, 6, 900)
fig = go.Figure()
for c in c_values:
fig.add_trace(go.Scatter(x=x, y=loggamma_pdf(x, c), mode="lines", name=f"c={c:g}"))
fig.update_layout(
title="Log-gamma PDF (loc=0, scale=1): effect of the shape c",
xaxis_title="x",
yaxis_title="pdf",
)
fig.show()
# Mean and mode as functions of c
c_grid = np.linspace(0.15, 20, 400)
mean_grid = psi(c_grid)
mode_grid = np.log(c_grid)
sd_grid = np.sqrt(polygamma(1, c_grid))
fig = go.Figure()
fig.add_trace(go.Scatter(x=c_grid, y=mean_grid, mode="lines", name="mean ψ(c)"))
fig.add_trace(go.Scatter(x=c_grid, y=mode_grid, mode="lines", name="mode log c"))
fig.add_trace(go.Scatter(x=c_grid, y=sd_grid, mode="lines", name="sd sqrt(ψ₁(c))"))
fig.update_layout(
title="How moments change with c (standard log-gamma)",
xaxis_title="c",
yaxis_title="value",
)
fig.show()
6) Derivations#
We’ll derive expectation, variance, and the likelihood for the standard case (loc=0, scale=1).
Expectation via the MGF#
Let \(Y\sim\mathrm{Gamma}(c,1)\) and \(X=\log Y\). Then

\[
M_X(t) = \mathbb{E}\left[e^{tX}\right] = \mathbb{E}\left[Y^{t}\right].
\]
Using the Gamma density \(f_Y(y)=\dfrac{1}{\Gamma(c)}y^{c-1}e^{-y}\) for \(y>0\):

\[
\mathbb{E}\left[Y^{t}\right] = \frac{1}{\Gamma(c)}\int_0^\infty y^{\,c+t-1}e^{-y}\,dy = \frac{\Gamma(c+t)}{\Gamma(c)}, \qquad t>-c.
\]
So \(M_X(t)=\Gamma(c+t)/\Gamma(c)\) and the cumulant generating function is

\[
K(t) = \log M_X(t) = \log\Gamma(c+t) - \log\Gamma(c).
\]

Differentiate at \(t=0\):

\[
\mathbb{E}[X] = K'(0) = \psi(c).
\]
Variance#
Similarly,

\[
\mathrm{Var}(X) = K''(0) = \psi_1(c).
\]
(The polygamma \(\psi_1\) is the trigamma function.)
Likelihood (shape parameter \(c\))#
For i.i.d. samples \(x_1,\dots,x_n\) from the standard log-gamma, the log-likelihood is

\[
\ell(c) = c\sum_{i=1}^{n} x_i - \sum_{i=1}^{n} e^{x_i} - n\log\Gamma(c).
\]
Differentiate with respect to \(c\) to get the score:

\[
\frac{\partial \ell}{\partial c} = \sum_{i=1}^{n} x_i - n\,\psi(c).
\]
The MLE \(\hat c\) solves \(\psi(\hat c)=\bar x\) and can be found by 1D root finding (no closed form).
def loggamma_loglik_c(x, c):
'''Log-likelihood for c in the standard log-gamma (loc=0, scale=1).
l(c) = c * sum(x) - sum(exp(x)) - n * log Gamma(c)
'''
x = np.asarray(x, dtype=float)
c = float(c)
if c <= 0:
return -np.inf
return c * np.sum(x) - np.sum(np.exp(x)) - x.size * gammaln(c)
def loggamma_score_c(x, c):
'''Score d/dc loglik for the standard log-gamma.'''
x = np.asarray(x, dtype=float)
c = float(c)
if c <= 0:
return np.nan
return np.sum(x) - x.size * psi(c)
def mle_c_from_sample(x):
'''Compute the MLE for c when loc=0 and scale=1 is assumed.
Uses the fact that the MLE solves psi(c) = mean(x).
'''
x = np.asarray(x, dtype=float)
m = float(x.mean())
lo = 1e-10
hi = 1.0
while psi(hi) < m:
hi *= 2.0
if hi > 1e8:
raise RuntimeError("Failed to bracket the root for c.")
sol = optimize.root_scalar(lambda cc: psi(cc) - m, bracket=(lo, hi), method="brentq")
if not sol.converged:
raise RuntimeError("Root finding did not converge.")
return float(sol.root)
c_true = 3.0
x = stats.loggamma(c_true, loc=0, scale=1).rvs(size=4000, random_state=rng)
c_hat = mle_c_from_sample(x)
c_hat, c_true
(3.0014571989242746, 3.0)
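Differentiating the score once more gives the Fisher information \(I(c)=\psi_1(c)\) per observation, so an approximate standard error for \(\hat c\) is \(1/\sqrt{n\,\psi_1(\hat c)}\). A sketch:

# Sketch: asymptotic standard error of the MLE via the Fisher information.
se_c = 1.0 / np.sqrt(x.size * polygamma(1, c_hat))
(c_hat - c_true) / se_c  # typically within a couple of standard errors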
7) Sampling & simulation#
A simple and efficient sampling route uses the transformation

\[
Y \sim \mathrm{Gamma}(c,1) \quad\Longrightarrow\quad X = \mathrm{loc} + s\,\log Y \sim \mathrm{LogGamma}(c, \mathrm{loc}, s).
\]
So the core task is sampling a Gamma random variable. We’ll implement the Marsaglia–Tsang rejection sampler for \(\mathrm{Gamma}(c,1)\) using NumPy only, then take logs to get log-gamma samples.
Marsaglia–Tsang (for shape \(\alpha\ge 1\)):
Set \(d=\alpha-1/3\), \(c=1/\sqrt{9d}\).
Repeat: draw \(Z\sim\mathcal{N}(0,1)\), set \(V=(1+cZ)^3\), draw \(U\sim\mathrm{Unif}(0,1)\).
Accept \(dV\) if \(U < 1-0.0331\,Z^4\) (fast squeeze) or \(\log U < \tfrac12 Z^2 + d\,(1-V+\log V)\); otherwise reject and retry.
For \(0<\alpha<1\), use a reduction: if \(G\sim\mathrm{Gamma}(\alpha+1,1)\) and \(U\sim\mathrm{Unif}(0,1)\), then \(G\,U^{1/\alpha}\sim\mathrm{Gamma}(\alpha,1)\).
def _gamma_rvs_mt_shape_ge_1(alpha, n, rng):
'''Marsaglia–Tsang sampler for Gamma(alpha, 1) with alpha >= 1.'''
d = alpha - 1.0 / 3.0
c = 1.0 / np.sqrt(9.0 * d)
out = np.empty(n, dtype=float)
filled = 0
while filled < n:
m = n - filled
z = rng.normal(size=m)
v = (1.0 + c * z) ** 3
valid = v > 0
if not np.any(valid):
continue
z = z[valid]
v = v[valid]
u = rng.random(size=v.size)
accept = (u < 1.0 - 0.0331 * (z**4)) | (
np.log(u) < 0.5 * z**2 + d * (1.0 - v + np.log(v))
)
accepted = d * v[accept]
k = accepted.size
out[filled : filled + k] = accepted
filled += k
return out
def gamma_rvs_numpy(alpha, theta=1.0, size=1, rng=None):
'''Sample from Gamma(alpha, theta) using NumPy only.'''
alpha = float(alpha)
theta = float(theta)
if alpha <= 0:
raise ValueError("alpha must be > 0")
if theta <= 0:
raise ValueError("theta must be > 0")
if rng is None:
rng = np.random.default_rng()
n = int(np.prod(size))
if alpha >= 1:
g = _gamma_rvs_mt_shape_ge_1(alpha, n, rng)
else:
g = _gamma_rvs_mt_shape_ge_1(alpha + 1.0, n, rng)
u = rng.random(size=n)
g = g * (u ** (1.0 / alpha))
return (theta * g).reshape(size)
def loggamma_rvs_numpy(c, loc=0.0, scale=1.0, size=1, rng=None):
'''Sample from log-gamma using NumPy only (via Gamma + log + loc/scale).'''
c = float(c)
loc = float(loc)
scale = float(scale)
if c <= 0:
raise ValueError("c must be > 0")
if scale <= 0:
raise ValueError("scale must be > 0")
if rng is None:
rng = np.random.default_rng()
y = gamma_rvs_numpy(c, theta=1.0, size=size, rng=rng)
z = np.log(y)
return loc + scale * z
# Quick check: NumPy-only sampler vs SciPy
c_check = 2.2
s_np = loggamma_rvs_numpy(c_check, size=50_000, rng=rng)
s_sp = stats.loggamma(c_check).rvs(size=50_000, random_state=rng)
np.mean(s_np) - np.mean(s_sp), np.var(s_np) - np.var(s_sp)
(-0.0010786593035603254, 0.004672665089558881)
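Matching the first two moments is a weak test; a KS test against the exact CDF is sharper (a sketch; rejection sampling is exact, so the p-value should not be systematically small):

# Sketch: KS test of the NumPy-only sampler against the exact log-gamma CDF.
stats.kstest(s_np, stats.loggamma(c_check).cdf)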
8) Visualization#
We’ll visualize:
the PDF and how it compares to a Monte Carlo histogram
the CDF and how it compares to an empirical CDF
We’ll use the NumPy-only sampler from the previous section.
def ecdf(samples):
x = np.sort(np.asarray(samples, dtype=float))
y = np.arange(1, x.size + 1) / x.size
return x, y
c_viz, loc_viz, scale_viz = 2.8, 0.0, 1.0
n_viz = 120_000
samples = loggamma_rvs_numpy(c_viz, loc=loc_viz, scale=scale_viz, size=n_viz, rng=rng)
x_lo, x_hi = np.quantile(samples, [0.001, 0.999])
x_grid = np.linspace(float(x_lo), float(x_hi), 700)
# PDF + histogram
fig = go.Figure()
fig.add_trace(
go.Histogram(
x=samples,
nbinsx=80,
histnorm="probability density",
name="Monte Carlo samples",
opacity=0.55,
)
)
fig.add_trace(
go.Scatter(
x=x_grid,
y=loggamma_pdf(x_grid, c_viz, loc=loc_viz, scale=scale_viz),
mode="lines",
name="Theoretical PDF",
line=dict(width=3),
)
)
fig.update_layout(
title=f"Log-gamma(c={c_viz:g}) (loc={loc_viz:g}, scale={scale_viz:g}): histogram vs PDF",
xaxis_title="x",
yaxis_title="density",
bargap=0.02,
)
fig.show()
# CDF + empirical CDF
xs, ys = ecdf(samples)
fig = go.Figure()
fig.add_trace(
go.Scatter(
x=x_grid,
y=loggamma_cdf(x_grid, c_viz, loc=loc_viz, scale=scale_viz),
mode="lines",
name="Theoretical CDF",
line=dict(width=3),
)
)
fig.add_trace(
go.Scatter(
x=xs,
y=ys,
mode="markers",
name="Empirical CDF",
marker=dict(size=3),
opacity=0.6,
)
)
fig.update_layout(
title=f"Log-gamma(c={c_viz:g}): CDF vs empirical CDF",
xaxis_title="x",
yaxis_title="cdf",
)
fig.show()
9) SciPy integration (scipy.stats.loggamma)#
SciPy’s implementation is scipy.stats.loggamma.
The shape parameter is passed positionally (often written as `c`). `loc` and `scale` are the usual location/scale transformation.

Mapping: if \(Z\sim\mathrm{LogGamma}(c)\) is the standard variable, then `stats.loggamma(c, loc=loc, scale=scale)` describes \(X = \mathrm{loc} + \mathrm{scale}\cdot Z\).

Common methods:

`pdf`, `logpdf`, `cdf`, `ppf`
`rvs` (sampling)
`fit` (MLE fit of parameters)
c_true, loc_true, scale_true = 3.5, -0.2, 1.4
dist = stats.loggamma(c_true, loc=loc_true, scale=scale_true)
x = np.linspace(dist.ppf(0.001), dist.ppf(0.999), 600)
pdf = dist.pdf(x)
cdf = dist.cdf(x)
samples = dist.rvs(size=8000, random_state=rng)
# Unconstrained MLE
c_hat, loc_hat, scale_hat = stats.loggamma.fit(samples)
(c_hat, loc_hat, scale_hat)
(3.6194629453741767, -0.2601228248649343, 1.4124450567933038)
10) Statistical use cases#
Hypothesis testing / goodness of fit#
Use a Q–Q plot against a fitted log-gamma to visually check fit in the tails.
Use distributional tests like Kolmogorov–Smirnov (KS) for a quick check.
Caveat: if you estimate parameters from the same data, the classical KS p-value is not exact.
Bayesian modeling#
If you put a Gamma prior on a positive parameter (like a Poisson rate \(\lambda\)):

\[
\lambda \sim \mathrm{Gamma}(c, 1),
\]
then \(\log\lambda\sim\mathrm{LogGamma}(c)\) automatically. So log-gamma can be a convenient way to reason about priors on the log scale while keeping Gamma conjugacy on the original scale.
Generative modeling#
Generate \(X\sim\mathrm{LogGamma}(c)\) to model real-valued, left-skewed latent variables with a super-light right tail.
Or generate \(Y=\exp(X)\), which is Gamma-distributed, if you need a positive variable with flexible right-skew on the original scale (see the quick check below).
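A quick check of that last point (a sketch): exponentiating log-gamma samples should pass a KS test against the corresponding Gamma.

# Sketch: exp of LogGamma(c) samples should be Gamma(c, 1) distributed.
c_gen = 2.4
x_gen = stats.loggamma(c_gen).rvs(size=20_000, random_state=rng)
stats.kstest(np.exp(x_gen), stats.gamma(c_gen).cdf)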
# Hypothesis test demo: fit log-gamma and run a KS test + Q–Q plot
x_obs = stats.loggamma(2.4, loc=0.3, scale=0.9).rvs(size=600, random_state=rng)
c_fit, loc_fit, scale_fit = stats.loggamma.fit(x_obs)
dist_fit = stats.loggamma(c_fit, loc=loc_fit, scale=scale_fit)
D, p_value = stats.kstest(x_obs, dist_fit.cdf)
(D, p_value, c_fit, loc_fit, scale_fit)
# Q–Q plot
probs = np.linspace(0.01, 0.99, 200)
theo_q = dist_fit.ppf(probs)
samp_q = np.quantile(x_obs, probs)
mn = float(min(theo_q.min(), samp_q.min()))
mx = float(max(theo_q.max(), samp_q.max()))
fig = go.Figure()
fig.add_trace(go.Scatter(x=theo_q, y=samp_q, mode="markers", name="quantiles"))
fig.add_trace(go.Scatter(x=[mn, mx], y=[mn, mx], mode="lines", name="y=x"))
fig.update_layout(
title="Q–Q plot: sample vs fitted log-gamma",
xaxis_title="theoretical quantiles",
yaxis_title="sample quantiles",
)
fig.show()
# Bayesian / generative modeling demo: Gamma prior on lambda => log-lambda is log-gamma
c_prior = 4.0
lambda_samples = gamma_rvs_numpy(c_prior, theta=1.0, size=120_000, rng=rng)
log_lambda = np.log(lambda_samples)
x_lo, x_hi = np.quantile(log_lambda, [0.001, 0.999])
x_grid = np.linspace(float(x_lo), float(x_hi), 700)
fig = go.Figure()
fig.add_trace(
go.Histogram(
x=log_lambda,
nbinsx=80,
histnorm="probability density",
name="log(lambda) where lambda~Gamma(c,1)",
opacity=0.55,
)
)
fig.add_trace(
go.Scatter(
x=x_grid,
y=loggamma_pdf(x_grid, c_prior),
mode="lines",
name="Log-gamma(c) PDF",
line=dict(width=3),
)
)
fig.update_layout(
title=f"If lambda~Gamma(c,1), then log(lambda)~LogGamma(c) (c={c_prior:g})",
xaxis_title="x = log(lambda)",
yaxis_title="density",
)
fig.show()
11) Pitfalls#
Invalid parameters: require \(c>0\) and `scale > 0`.
MGF domain: \(M(t)=\Gamma(c+t)/\Gamma(c)\) exists only for \(t>-c\) (or \(t>-c/\mathrm{scale}\) in the location–scale form).
Overflow/underflow: computing \(\exp(x)\) can overflow for large \(x\). Use `logpdf` when possible.
Fitting sensitivity: unconstrained `fit` estimates `loc`/`scale` too; if you know `loc=0` and `scale=1`, constrain them (e.g. `floc=0, fscale=1`) to reduce variance (a minimal sketch follows below).
Goodness-of-fit p-values after fitting: tests like KS are not distribution-free when parameters are estimated from the same data; use simulation/bootstrapping for calibrated p-values if needed.
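A minimal sketch of the constrained fit, assuming the data really are standard log-gamma:

# Sketch: fix loc and scale during fitting when they are known.
x_std = stats.loggamma(2.0).rvs(size=2000, random_state=rng)
c_free, loc_free, scale_free = stats.loggamma.fit(x_std)
c_fixed, _, _ = stats.loggamma.fit(x_std, floc=0, fscale=1)
(c_free, c_fixed)  # the constrained estimate has fewer free parameters, hence less variance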
12) Summary#
Log-gamma is the distribution of \(\log Y\) where \(Y\sim\mathrm{Gamma}(c,1)\).
Standard PDF: \(f(x\mid c)=\exp(c x-e^x)/\Gamma(c)\) on \(\mathbb{R}\).
CDF: \(F(x\mid c)=P(c,e^x)\) (regularized incomplete gamma).
Moments come from polygamma: mean \(\psi(c)\), variance \(\psi_1(c)\), and higher cumulants \(\psi_{n-1}(c)\).
Sampling is easy: sample Gamma (Marsaglia–Tsang), then take logs; apply `loc`/`scale` as an affine transform.